Potential performance improvements #16
Conversation
Things I could do by eye, up until now:

Any further improvements will require some more fine-grained benchmarking.

Focussed mainly on decoding, since we use that way more often than encoding anyway.
Co-authored-by: Thibaut Vandervelden <thvdveld@vub.be>
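As a rough illustration of the kind of "by eye" decode optimisation this refers to (a sketch only, under the assumption that the hot loop is the usual blurhash basis-function evaluation; names are made up and this is not the actual diff): precompute the cosine basis values once per axis instead of calling `cos` for every pixel/component pair.

```rust
use std::f32::consts::PI;

/// Illustrative helper (not this crate's actual code): precompute
/// cos(PI * k * x / width) for every column x and component k once,
/// so the innermost per-pixel loop only does multiply-adds.
fn cosine_table(width: usize, components: usize) -> Vec<f32> {
    let mut table = vec![0.0f32; width * components];
    for k in 0..components {
        for x in 0..width {
            table[k * width + x] = (PI * k as f32 * x as f32 / width as f32).cos();
        }
    }
    table
}
```

With tables like this for both axes, the per-pixel work reduces to lookups and multiply-adds, which is typically where a naive decoder spends most of its time.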
Codecov Report
Attention: Patch coverage is …

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main      #16      +/-   ##
==========================================
+ Coverage   87.00%   87.97%   +0.97%
==========================================
  Files           6        6
  Lines         300      316      +16
==========================================
+ Hits          261      278      +17
+ Misses         39       38       -1
```

☔ View full report in Codecov by Sentry.
Oops.
If we want more speed, we'll have to change the algorithm to use some fast Fourier equivalent of the DCT. Not currently in the mood for that. But I think with this PR, we probably have the fastest blurhash around :'-)

We should probably make this WASM-compatible and make a demo, like https://github.com/fpapado/blurhash-rust-wasm did before. Cc @fpapado

TL;DR: ~60% faster in decoding (8 ms for 512x512, 80 µs for 50x50), ~77% faster in encoding (688 µs for …).

Benchmark result dump
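For context on the "fast Fourier equivalent of the DCT" remark: blurhash's encoder evaluates each (i, j) component with a full pass over the image, so the cost grows as O(W · H · nx · ny), whereas an FFT-based DCT-II would compute all components in roughly O(W · H · (log W + log H)). A minimal single-channel sketch of the naive version such a change would replace (illustrative only, not this crate's code):

```rust
use std::f32::consts::PI;

/// Naive single-channel DCT component as blurhash encoding computes it
/// conceptually: one full pass over the image per (i, j) component.
/// Sketch only; the real encoder works on linear RGB with normalization.
fn dct_component(pixels: &[f32], width: usize, height: usize, i: usize, j: usize) -> f32 {
    let mut sum = 0.0f32;
    for y in 0..height {
        for x in 0..width {
            let basis = (PI * i as f32 * x as f32 / width as f32).cos()
                * (PI * j as f32 * y as f32 / height as f32).cos();
            sum += basis * pixels[y * width + x];
        }
    }
    sum
}
```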
... let's not.
Looks like you had fun, this is always welcome 🚀
I'm trying to find some more improvements, but nothing with a measurable effect so far. At least we're allocating less frequently!
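On "allocating less frequently", a sketch of the usual shape of such a change (hypothetical API, not this crate's actual signature): let the caller own the output buffer so repeated decodes reuse a single allocation.

```rust
/// Hypothetical buffer-reusing decode entry point (illustrative only):
/// the caller provides the RGBA buffer, so decoding in a loop does not
/// allocate per call.
fn decode_into(_hash: &str, width: usize, height: usize, out: &mut [u8]) {
    assert!(out.len() >= width * height * 4, "RGBA output buffer too small");
    // Actual decoding omitted; the point is that `out` is reused.
    out.fill(0);
}

fn main() {
    // Allocate once, reuse for every thumbnail.
    let mut rgba = vec![0u8; 50 * 50 * 4];
    // Example hash from the blurhash readme, repeated to simulate a batch.
    for hash in ["LEHV6nWB2yk8pyo0adR*.7kCMdnj"; 3] {
        decode_into(hash, 50, 50, &mut rgba);
    }
}
```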